Results 1 - 20 of 24
1.
Biomed Opt Express ; 15(4): 2262-2280, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38633090

ABSTRACT

Optical coherence tomography (OCT) is a widely used clinical ophthalmic imaging technique, but the presence of speckle noise can obscure important pathological features and hinder accurate segmentation. This paper presents a novel method for denoising OCT images using a combination of texture loss and generative adversarial networks (GANs). Previous approaches have integrated deep learning techniques, starting with denoising Convolutional Neural Networks (CNNs) that employed pixel-wise losses. While effective in reducing noise, these methods often introduced a blurring effect in the denoised OCT images. To address this, perceptual losses were introduced, improving denoising performance and overall image quality. Building on these advancements, our research focuses on designing an image reconstruction GAN that generates OCT images with textural similarity to the gold standard, the averaged OCT image. We utilize the PatchGAN discriminator approach as a texture loss to enhance the quality of the reconstructed OCT images. We also compare the performance of UNet and ResNet as generators in the conditional GAN (cGAN) setting, as well as compare PatchGAN with the Wasserstein GAN. Using real clinical foveal-centered OCT retinal scans of children with normal vision, our experiments demonstrate that the combination of PatchGAN and UNet achieves superior performance (PSNR = 32.50) compared to recently proposed methods such as SiameseGAN (PSNR = 31.02). Qualitative experiments involving six masked clinical ophthalmologists also favor the reconstructed OCT images with PatchGAN texture loss. In summary, this paper introduces a novel method for denoising OCT images by incorporating texture loss within a GAN framework. The proposed approach outperforms existing methods and is well-received by clinical experts, offering promising advancements in OCT image reconstruction and facilitating accurate clinical interpretation.
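
To illustrate the idea of a PatchGAN discriminator acting as a texture loss next to a pixel-wise term, here is a minimal PyTorch sketch; it is an assumption-level example rather than the authors' implementation, and the single-channel input, layer sizes and L1 weight are illustrative choices only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """Maps an image to a grid of patch-wise real/fake logits (PatchGAN)."""
    def __init__(self, in_ch: int = 1, base: int = 64):
        super().__init__()
        def block(cin, cout, norm=True):
            layers = [nn.Conv2d(cin, cout, 4, stride=2, padding=1)]
            if norm:
                layers.append(nn.InstanceNorm2d(cout))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers
        self.net = nn.Sequential(
            *block(in_ch, base, norm=False),
            *block(base, base * 2),
            *block(base * 2, base * 4),
            nn.Conv2d(base * 4, 1, 4, padding=1),  # one logit per receptive-field patch
        )

    def forward(self, x):
        return self.net(x)

def generator_loss(disc, denoised, averaged_target, l1_weight=100.0):
    """Texture (adversarial) term on patches + pixel-wise L1 to the averaged scan."""
    patch_logits = disc(denoised)
    texture = F.binary_cross_entropy_with_logits(
        patch_logits, torch.ones_like(patch_logits))   # try to fool the patch critic
    pixel = F.l1_loss(denoised, averaged_target)        # stay close to the gold standard
    return texture + l1_weight * pixel
```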

2.
J Am Chem Soc ; 146(6): 4134-4143, 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38317439

ABSTRACT

Identifying multiple rival reaction products and transient species formed during ultrafast photochemical reactions and determining their time-evolving relative populations are key steps toward understanding and predicting photochemical outcomes. Yet, most contemporary ultrafast studies struggle with clearly identifying and quantifying competing molecular structures/species among the emerging reaction products. Here, we show that mega-electronvolt ultrafast electron diffraction in combination with ab initio molecular dynamics calculations offer a powerful route to determining time-resolved populations of the various isomeric products formed after UV (266 nm) excitation of the five-membered heterocyclic molecule 2(5H)-thiophenone. This strategy provides experimental validation of the predicted high (∼50%) yield of an episulfide isomer containing a strained three-membered ring within ∼1 ps of photoexcitation and highlights the rapidity of interconversion between the rival highly vibrationally excited photoproducts in their ground electronic state.

3.
Stud Health Technol Inform ; 310: 1490-1491, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269711

ABSTRACT

We report on the prediction performance of artificial intelligence components embedded into a telehealth platform underlying a newly established eye screening service connecting metropolitan-based ophthalmologists to patients in remote indigenous communities in the Northern Territory and Queensland. Two AI-based components embedded into the telehealth platform were evaluated on retinal images collected from 328 unique patients: an image quality alert system and a diabetic retinopathy detection system. Compared to ophthalmologists, the image quality detection algorithm was correct 72% of the time at an individual image level and 85% of the time at a patient level. The retinopathy detection algorithm was 85% accurate at an individual image level and 87% accurate at a patient level. This evaluation provides assurance for future service models using AI to complement and support the decisions of eye health assessment teams.
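
The abstract reports accuracy at both the image and the patient level but does not state how image-level outputs are rolled up per patient. The sketch below shows one plausible aggregation rule (flag a patient if any of their images is flagged), purely as an assumption for illustration.

```python
from collections import defaultdict

def patient_level_calls(image_results):
    """image_results: iterable of (patient_id, image_flagged) pairs."""
    flagged = defaultdict(bool)
    for patient_id, image_flagged in image_results:
        flagged[patient_id] = flagged[patient_id] or image_flagged  # any positive image flags the patient
    return dict(flagged)

# Example: two images for patient "A", one for patient "B"
print(patient_level_calls([("A", False), ("A", True), ("B", False)]))  # {'A': True, 'B': False}
```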


Subjects
Clinical Decision Support Systems, Diabetes Mellitus, Diabetic Retinopathy, Retinal Diseases, Humans, Diabetic Retinopathy/diagnostic imaging, Artificial Intelligence, Algorithms
4.
Stud Health Technol Inform ; 310: 911-915, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269941

ABSTRACT

Dental caries remains the most common chronic disease in childhood, affecting almost half of all children globally. Dental care and examination of children living in remote and rural areas is an ongoing challenge that has been compounded by COVID-19. The development of a validated system with the capacity to screen large numbers of children with some degree of automation has the potential to facilitate remote dental screening at low cost. In this study, we aim to develop and validate a deep learning system for the assessment of dental caries using color dental photos. Three state-of-the-art deep learning networks, namely VGG16, ResNet-50 and Inception-v3, were adopted for this task. A total of 1020 child dental photos were used to train and validate the system. We achieved an accuracy of 79%, with a precision of 95% and a recall of 75%, in classifying 'caries' versus 'sound' with Inception-v3.
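
A minimal transfer-learning sketch in the spirit of the study is shown below, assuming PyTorch/torchvision rather than the authors' actual framework; only the two-class head ('caries' vs 'sound') comes from the abstract, everything else is an illustrative default.

```python
import torch.nn as nn
from torchvision import models

def build_caries_classifier() -> nn.Module:
    """ImageNet-pretrained Inception-v3 with a new 2-way head ('caries' vs 'sound')."""
    model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
    model.aux_logits = False                        # use only the main classifier output
    model.fc = nn.Linear(model.fc.in_features, 2)   # replace the 1000-class ImageNet head
    return model                                    # expects 299x299 RGB photos
```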


Subjects
Deep Learning, Dental Caries, Child, Humans, Color, Dental Caries/diagnostic imaging, Automation
5.
Sci Rep ; 13(1): 18408, 2023 10 27.
Article in English | MEDLINE | ID: mdl-37891238

ABSTRACT

This paper presents a computationally light and memory-efficient convolutional neural network (CNN)-based fully automated system for detection of glaucoma, a leading cause of irreversible blindness worldwide. Using color fundus photographs, the system detects glaucoma in two steps. In the first step, the optic disc region is determined using the You Only Look Once (YOLO) CNN architecture. In the second step, classification into 'glaucomatous' and 'non-glaucomatous' is performed using the MobileNet architecture. A simplified version of the original YOLO net, specific to the context, is also proposed. Extensive experiments are conducted using seven state-of-the-art CNNs with varying computational intensity, namely, MobileNetV2, MobileNetV3, Custom ResNet, InceptionV3, ResNet50, 18-Layer CNN and InceptionResNetV2. A total of 6671 fundus images collected from seven publicly available glaucoma datasets are used for the experiment. The system achieves an accuracy and F1 score of 97.4% and 97.3%, with sensitivity, specificity, and AUC of 97.5%, 97.2%, and 99.3%, respectively. These findings are comparable with the best reported methods in the literature. With comparable or better performance, the proposed system produces significantly faster decisions and drastically minimizes the resource requirement. For example, the proposed system requires 12 times less memory than ResNet50 and produces decisions 2 times faster. With a significantly smaller memory footprint and faster processing, the proposed system can be directly embedded into resource-limited devices such as portable fundus cameras.
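
The two-step pipeline (localise the optic disc, then classify the crop) can be sketched as follows. This is an assumption-level illustration in PyTorch: `detect_optic_disc` is a hypothetical placeholder standing in for the trained YOLO detector, and an untrained MobileNetV2 head stands in for the trained classifier.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

def detect_optic_disc(image: Image.Image):
    """Placeholder for the YOLO step: return an (x1, y1, x2, y2) optic-disc box."""
    w, h = image.size
    return (w // 3, h // 3, 2 * w // 3, 2 * h // 3)   # dummy central box

classifier = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
classifier.classifier[1] = nn.Linear(classifier.last_channel, 2)  # glaucomatous / non-glaucomatous
classifier.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def grade_fundus(path: str) -> int:
    image = Image.open(path).convert("RGB")
    crop = image.crop(detect_optic_disc(image))              # step 1: localize the disc
    with torch.no_grad():
        logits = classifier(preprocess(crop).unsqueeze(0))   # step 2: classify the crop
    return int(logits.argmax(dim=1))                         # 0/1 class index
```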


Subjects
Glaucoma, Optic Disk, Humans, Glaucoma/diagnostic imaging, Optic Disk/diagnostic imaging, Fundus Oculi, Neural Networks (Computer), Ophthalmological Diagnostic Techniques
6.
Eur J Ophthalmol ; : 11206721231199126, 2023 Sep 06.
Article in English | MEDLINE | ID: mdl-37671441

ABSTRACT

INTRODUCTION: Automated assessment of age-related macular degeneration (AMD) using optical coherence tomography (OCT) has gained significant research attention in recent years. Though a number of convolutional neural network (CNN)-based methods have been proposed recently, methods that uncover the decision-making process of CNNs or critically interpret CNNs' decisions in this context are scant. This study aims to bridge this research gap. METHODS: We independently trained several state-of-the-art CNN models, namely VGG16, VGG19, Xception, ResNet50 and InceptionResNetV2, for AMD detection and applied CNN visualization techniques, namely Grad-CAM, Grad-CAM++, Score-CAM and Faster Score-CAM, to highlight the regions of interest utilized by the CNNs in this context. Retinal layer segmentation methods were also developed to explore how the CNN regions of interest related to the layers of the retinal structure. Extensive experiments involving 2130 SD-OCT scans collected from Duke University were performed. RESULTS: Experimental analysis shows that the Outer Nuclear Layer to Inner Segment Myoid (ONL-ISM) region influences the AMD detection decision heavily, as evident from the normalized intersection (NI) scores. For AMD cases, the obtained average NI scores were 13.13%, 17.2%, 9.7%, 10.95%, and 11.31% for VGG16, VGG19, ResNet50, Xception, and InceptionResNetV2, respectively, whereas for normal cases these values were 21.7%, 21.3%, 16.85%, 10.175%, and 16%, respectively. CONCLUSION: Critical analysis reveals that the ONL-ISM is the most contributing layer in determining AMD, followed by the Nerve Fiber Layer to Inner Plexiform Layer (NFL-IPL).
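
The abstract does not define the normalized intersection (NI) score precisely; the sketch below encodes one reasonable reading (the fraction of strongly activated CAM pixels that fall inside a given retinal-layer mask) and should be read as an assumption, not the paper's definition.

```python
import numpy as np

def normalized_intersection(cam: np.ndarray, layer_mask: np.ndarray,
                            threshold: float = 0.5) -> float:
    """cam: heatmap scaled to [0, 1]; layer_mask: boolean mask of one retinal layer."""
    active = cam >= threshold                       # pixels the CNN attends to
    if active.sum() == 0:
        return 0.0
    return float(np.logical_and(active, layer_mask).sum() / active.sum())
```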

7.
Rev Sci Instrum ; 94(5)2023 May 01.
Article in English | MEDLINE | ID: mdl-37219385

ABSTRACT

We report the modification of a gas-phase ultrafast electron diffraction (UED) instrument that enables experiments with both gas and condensed-matter targets, where a time-resolved experiment with sub-picosecond resolution is demonstrated with solid-state samples. The instrument relies on a hybrid DC-RF acceleration structure to deliver femtosecond electron pulses onto the target, synchronized with femtosecond laser pulses. The laser pulses and electron pulses are used to excite the sample and to probe the structural dynamics, respectively. The upgraded system adds the capability to perform transmission UED on thin solid samples, allowing samples to be cooled to cryogenic temperatures and time-resolved measurements to be carried out. We tested the cooling capability by recording diffraction patterns of temperature-dependent charge density waves in 1T-TaS2. The time-resolved capability is experimentally verified by capturing the dynamics in photoexcited single-crystal gold.

8.
Struct Dyn ; 9(5): 054303, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36267802

ABSTRACT

Ultrafast electron diffraction (UED) from aligned molecules in the gas phase has successfully retrieved structures of both linear and symmetric top molecules. Alignment of asymmetric tops has been recorded with UED but no structural information was retrieved. We present here the extraction of two-dimensional structural information from simple transformations of experimental diffraction patterns of aligned molecules as a proof-of-principle for the recovery of the full structure. We align 4-fluorobenzotrifluoride with a linearly polarized laser and show that we can distinguish between atomic pairs with equal distances that are parallel and perpendicular to the aligned axis. We additionally show with numerical simulations that by cooling the molecules to a rotational temperature of 1 K, more distances and angles can be resolved through direct transformations.

9.
Vision (Basel) ; 6(3)2022 Jul 20.
Article in English | MEDLINE | ID: mdl-35893762

ABSTRACT

The aim of the study was to assess various retinal vessel parameters of diabetes mellitus (DM) patients and their correlations with systemic factors in type 2 DM. In this retrospective exploratory study, 21 pairs of baseline and follow-up images of patients affected by DM were randomly chosen from the Sankara Nethralaya-Diabetic Retinopathy Study (SN DREAMS) I and II datasets. Patients' fundi were photographed, and the diagnosis was made based on the Klein classification. Vessel thickness parameters were generated using a web-based retinal vascular analysis platform called VASP. The thickness changes between the baseline and follow-up images were computed and normalized with the actual thicknesses of the baseline images. The majority of parameters showed 10-20% changes over time. Vessel width in zone C for the second vein was significantly reduced from baseline to follow-up, which showed positive correlations with systolic blood pressure and serum high-density lipoproteins. Fractal dimension for all vessels in zones B and C and fractal dimension for veins in zones A, B and C showed a minimal increase from baseline to follow-up, which had a linear relationship with diastolic pressure, mean arterial pressure and serum triglycerides (p < 0.05). Lacunarity for all vessels and veins in zones A, B and C showed a minimal decrease from baseline to follow-up, which had a negative correlation with pulse pressure and a positive correlation with serum triglycerides (p < 0.05). The vessel widths for the first and second arteries significantly increased from baseline to follow-up and had an association with high-density lipoproteins, glycated haemoglobin A1C, serum low-density lipoproteins and total serum cholesterol. The central reflex intensity ratio for the second artery was significantly decreased from baseline to follow-up, and positive correlations were noted with serum triglycerides, serum low-density lipoproteins and total serum cholesterol. The branching coefficients for the artery in zones B and C and the junctional exponent deviation for the artery in zone A decreased from baseline to follow-up and showed positive correlations with serum triglycerides, serum low-density lipoproteins and total serum cholesterol. Identifying early microvascular changes in diabetic patients will allow for earlier intervention, improve visual outcomes and prevent vision loss.
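
Fractal dimension of the retinal vasculature is typically estimated by box counting; the sketch below is a generic estimator rather than the VASP platform's implementation, and it assumes a binary vessel mask that is larger than the largest box size and contains at least one vessel pixel.

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate the fractal dimension of a 2-D boolean vessel mask by box counting."""
    counts = []
    for k in sizes:
        h = (mask.shape[0] // k) * k
        w = (mask.shape[1] // k) * k
        blocks = mask[:h, :w].reshape(h // k, k, w // k, k)
        counts.append(blocks.any(axis=(1, 3)).sum())          # boxes containing vessel pixels
    # Slope of log(count) vs log(1/box size) is the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)
```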

10.
Phys Chem Chem Phys ; 24(25): 15416-15427, 2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35707953

ABSTRACT

The structural dynamics of photoexcited gas-phase carbon disulfide (CS2) molecules are investigated using ultrafast electron diffraction. The dynamics were triggered by excitation of the optically bright 1B2(1Σu+) state by an ultraviolet femtosecond laser pulse centred at 200 nm. In accordance with previous studies, rapid vibrational motion facilitates a combination of internal conversion and intersystem crossing to lower-lying electronic states. Photodissociation via these electronic manifolds results in the production of CS fragments in the electronic ground state and dissociated singlet and triplet sulphur atoms. The structural dynamics are extracted from the experiment using a trajectory-fitting filtering approach, revealing the main characteristics of the singlet and triplet dissociation pathways. Finally, the effect of the time-resolution on the experimental signal is considered and an outlook to future experiments provided.

11.
Article in English | MEDLINE | ID: mdl-35534406

ABSTRACT

OBJECTIVE: This study aimed to evaluate a deep learning (DL) system using convolutional neural networks (CNNs) for automatic detection of caries on bitewing radiographs. STUDY DESIGN: In total, 2468 bitewings were labeled by 3 dentists to create the reference standard. Of these images, 1257 had caries and 1211 were sound. The Faster region-based CNN was applied to detect the regions of interest (ROIs) with potential lesions. A total of 13,246 ROIs were generated from all 'sound' images, and 50% of 'caries' images (selected randomly) were used to train the ROI detection module. The remaining 50% of 'caries' images were used to validate the ROI detection module. Caries detection was then performed using Inception-ResNet-v2. A set of 3297 'caries' and 5321 'sound' ROIs cropped from the 2468 images was used to train and validate the caries detection module. Data sets were randomly divided into training (90%) and validation (10%) data sets. Recall, precision, specificity, accuracy, and F1 score were used as metrics to assess performance. RESULTS: The caries detection module achieved recall, precision, specificity, accuracy, and F1 scores of 0.89, 0.86, 0.86, 0.87, and 0.87, respectively. CONCLUSIONS: The proposed DL system demonstrated promising performance for detecting proximal surface caries on bitewings.
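
For reference, the reported metrics follow directly from the binary confusion-matrix counts; the sketch below simply restates those standard definitions (it is not code from the study, and tp/fp/tn/fn are the usual counts, not values from the paper).

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary classification metrics from confusion-matrix counts."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision,
            "specificity": specificity, "accuracy": accuracy, "f1": f1}
```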


Subjects
Deep Learning, Dental Caries, Dental Caries/diagnostic imaging, Humans
12.
Dentomaxillofac Radiol ; 51(2): 20210296, 2022 Feb 01.
Article in English | MEDLINE | ID: mdl-34644152

ABSTRACT

OBJECTIVE: This study aimed to evaluate an automated detection system to detect and classify permanent teeth on orthopantomogram (OPG) images using convolutional neural networks (CNNs). METHODS: In total, 591 digital OPGs were collected from patients older than 18 years. Three qualified dentists performed individual tooth labelling on the images to generate the ground truth annotations. A three-step procedure, relying upon CNNs, was proposed for automated detection and classification of teeth. First, U-Net, a type of CNN, performed preliminary segmentation of tooth regions, i.e., detection of regions of interest (ROIs), on the panoramic images. Second, the Faster R-CNN, an advanced object detection architecture, identified each tooth within the ROI determined by the U-Net. Third, the VGG-16 architecture classified each tooth into 32 categories, and a tooth number was assigned. A total of 17,135 teeth cropped from 591 radiographs were used to train and validate the tooth detection and tooth numbering modules. 90% of the OPG images were used for training, and the remaining 10% were used for validation. 10-fold cross-validation was performed to measure performance. The intersection over union (IoU), F1 score, precision, and recall (i.e. sensitivity) were used as metrics to evaluate the performance of the resultant CNNs. RESULTS: The ROI detection module had an IoU of 0.70. The tooth detection module achieved a recall of 0.99 and a precision of 0.99. The tooth numbering module had a recall, precision and F1 score of 0.98. CONCLUSION: The resultant automated method achieved high performance for automated tooth detection and numbering from OPG images. Deep learning can be helpful in the automatic filing of dental charts in general dentistry and forensic medicine.
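
The intersection over union (IoU) used to score the ROI detection step is a standard quantity; the sketch below computes it for two axis-aligned, non-degenerate boxes in (x1, y1, x2, y2) form (generic code, not the study's).

```python
def iou(box_a, box_b) -> float:
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)            # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```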


Subjects
Deep Learning, Tooth, Humans, Neural Networks (Computer), Radiography, Panoramic Radiography, Tooth/diagnostic imaging
13.
Sci Rep ; 11(1): 9704, 2021 05 06.
Article in English | MEDLINE | ID: mdl-33958686

ABSTRACT

Diabetic retinopathy (DR) is a leading cause of blindness and affects millions of people throughout the world. Early detection and timely checkups are key to reducing the risk of blindness. Automated grading of DR is a cost-effective way to ensure early detection and timely checkups. Deep learning, or more specifically convolutional neural network (CNN)-based methods, produces state-of-the-art performance in DR detection. Whilst CNN-based methods have been proposed, the image features these models extract have not been examined for their clinical relevance. Here we first adopt a CNN visualization strategy to discover the inherent image features involved in the CNN's decision-making process. Then, we critically analyze those features with respect to commonly known pathologies, namely microaneurysms, hemorrhages and exudates, and other ocular components. We also critically analyze different CNNs by considering what image features they pick up during learning and by justifying their clinical relevance. The experiments are executed on publicly available fundus datasets (EyePACS and DIARETDB1), achieving an accuracy of 89-95%, with AUC, sensitivity and specificity of 95-98%, 74-86%, and 93-97%, respectively, for disease-level grading of DR. Whilst different CNNs produce consistent classification results, the rate of disagreement between the image features picked up by different models could be as high as 70%.
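
A generic Grad-CAM implementation of the kind used for this sort of feature inspection is sketched below; it is an assumption, not the paper's code. Channel weights come from globally averaged gradients of the class score, and the weighted, rectified feature map is upsampled to image resolution. With a torchvision ResNet, for example, `target_layer` would typically be `model.layer4`.

```python
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx):
    """image: (C, H, W) tensor; model must return raw class logits of shape (1, num_classes)."""
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    try:
        score = model(image.unsqueeze(0))[0, class_idx]
        model.zero_grad()
        score.backward()                                        # gradients reach the hooked layer
        weights = grads["a"].mean(dim=(2, 3), keepdim=True)     # global-average gradients per channel
        cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
        return (cam / (cam.max() + 1e-8)).squeeze().detach()    # normalized heatmap, (H, W)
    finally:
        h1.remove()
        h2.remove()
```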


Subjects
Diabetic Retinopathy/diagnostic imaging, Neural Networks (Computer), Algorithms, Datasets as Topic, Deep Learning, Diabetic Retinopathy/physiopathology, Humans, Sensitivity and Specificity
14.
Front Neurol ; 12: 637000, 2021.
Article in English | MEDLINE | ID: mdl-33833728

ABSTRACT

Background: Patient and public involvement (PPI) is an active partnership between the public and researchers in the research process. In dementia research, PPI ensures that the perspectives of the person with "lived experience" of dementia are considered. To date, in many lower- and middle-income countries (LMIC), where dementia research is still developing, PPI is not well-known nor regularly undertaken. Thus, here, we describe PPI activities undertaken in seven research sites across South Asia as exemplars of introducing PPI into dementia research for the first time. Objective: Through a range of PPI exemplar activities, our objectives were to: (1) inform the feasibility of a dementia-related study; and (2) develop capacity and capability for PPI for dementia research in South Asia. Methods: Our approach had two parts. Part 1 involved co-developing new PPI groups at seven clinical research sites in India, Pakistan and Bangladesh to undertake different PPI activities, mapping onto different "rings" of the Wellcome Trust's "Public Engagement Onion" model. The PPI activities included planning for public engagement events, consultation on the study protocol and conduct, the adaptation of a study screening checklist, development and delivery of dementia training for professionals, and a dementia training programme for public contributors. Part 2 involved an online survey with local researchers to gain insight on their experience of applying PPI in dementia research. Results: Overall, capacity and capability to include PPI in dementia research were significantly enhanced across the sites. Researchers reported that engaging in PPI activities had enhanced their understanding of dementia research and increased the meaningfulness of the work. Moreover, each site reported their own PPI activity-related outcomes, including: (1) changes in attitudes and behavior to dementia and research involvement; (2) best methods to inform participants about the dementia study; (3) increased opportunities to share knowledge and study outcomes; and (4) adaptations to the study protocol through co-production. Conclusions: Introducing PPI for dementia research in LMIC settings, using a range of activity types, is important for meaningful and impactful dementia research. To our knowledge, this is the first example of PPI for dementia research in South Asia.

15.
Faraday Discuss ; 228(0): 39-59, 2021 May 27.
Article in English | MEDLINE | ID: mdl-33565561

ABSTRACT

We investigate the fragmentation and isomerization of toluene molecules induced by strong-field ionization with a femtosecond near-infrared laser pulse. Momentum-resolved coincidence time-of-flight ion mass spectrometry is used to determine the relative yield of different ionic products and fragmentation channels as a function of laser intensity. Ultrafast electron diffraction is used to capture the structure of the ions formed on a picosecond time scale by comparing the diffraction signal with theoretical predictions. Through the combination of the two measurements and theory, we are able to determine the main fragmentation channels and to distinguish between ions with identical mass but different structures. In addition, our diffraction measurements show that the independent atom model, which is widely used to analyze electron diffraction patterns, is not a good approximation for diffraction from ions. We show that the diffraction data is in very good agreement with ab initio scattering calculations.
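
The independent atom model referred to here approximates the elastic diffraction signal as a sum of atomic self-terms plus sinc-modulated pair terms (the Debye form). The sketch below writes that approximation out for a fixed geometry; `form_factor(z, s)` is a hypothetical lookup for tabulated, real-valued electron scattering amplitudes, and the code is illustrative rather than the analysis used in the paper.

```python
import numpy as np

def iam_intensity(positions, atomic_numbers, s, form_factor):
    """Independent-atom-model (Debye) intensity.

    positions: (N, 3) atomic coordinates in angstrom
    s: 1-D array of momentum-transfer values (1/angstrom)
    form_factor: callable (atomic number, s) -> real scattering amplitude array
    """
    positions = np.asarray(positions, dtype=float)
    f = np.array([form_factor(z, s) for z in atomic_numbers])   # shape (N, len(s))
    intensity = (f ** 2).sum(axis=0)                            # atomic (self) terms
    n = len(atomic_numbers)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(positions[i] - positions[j])
            intensity = intensity + f[i] * f[j] * np.sinc(s * r / np.pi)  # sin(sr)/(sr)
    return intensity
```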

16.
Sci Rep ; 10(1): 7266, 2020 04 29.
Article in English | MEDLINE | ID: mdl-32350327

ABSTRACT

Alterations of Young's modulus (YM) and Poisson's ratio (PR) in biological tissues are often early indicators of the onset of pathological conditions. Knowledge of these parameters has been proven to be of great clinical significance for the diagnosis, prognosis and treatment of cancers. Currently, however, there are no non-invasive modalities that can be used to image and quantify these parameters in vivo without assuming incompressibility of the tissue, an assumption that is rarely justified in human tissues. In this paper, we developed a new method to simultaneously reconstruct YM and PR of a tumor and of its surrounding tissues based on the assumptions of axisymmetry and ellipsoidal-shape inclusion. This new, non-invasive method allows the generation of high spatial resolution YM and PR maps from axial and lateral strain data obtained via ultrasound elastography. The method was validated using finite element (FE) simulations and controlled experiments performed on phantoms with known mechanical properties. The clinical feasibility of the developed method was demonstrated in an orthotopic mouse model of breast cancer. Our results demonstrate that the proposed technique can estimate the YM and PR of spherical inclusions with accuracy higher than 99% and with accuracy higher than 90% in inclusions of different geometries and under various clinically relevant boundary conditions.
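
The full axisymmetric inverse reconstruction is beyond a short sketch, but the underlying strain-ratio idea can be stated simply: under idealized uniform uniaxial compression, the effective Poisson's ratio is the negative lateral-to-axial strain ratio. The snippet below encodes only that textbook simplification, as an assumption, not the paper's reconstruction method.

```python
import numpy as np

def effective_poisson_ratio(lateral_strain: np.ndarray, axial_strain: np.ndarray) -> np.ndarray:
    """Element-wise nu ~ -eps_lateral / eps_axial for co-registered strain maps
    (axial strain is negative in compression, lateral strain positive)."""
    return -lateral_strain / axial_strain
```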


Subjects
Breast Neoplasms/pathology, Animals, Disease Models (Animal), Elasticity Imaging Techniques/methods, Female, Humans, Mice, Poisson Distribution, Reproducibility of Results, Mechanical Stress
17.
Appl AI Lett ; 1(1)2020 Oct.
Article in English | MEDLINE | ID: mdl-36478669

ABSTRACT

The aim of this work was to develop a convolutional neural network visualization strategy so that optical coherence tomography (OCT) features contributing to the evolution of age-related macular degeneration (AMD) can be better determined. We have trained a U-Net model to utilize baseline OCT to predict the progression of geographic atrophy (GA), a late-stage manifestation of AMD. We have augmented the U-Net architecture by attaching deconvolutional neural networks (deconvnets). Deconvnets produce the reconstructed feature maps and provide an indication regarding the inherent baseline OCT features contributing to GA progression. Experiments were conducted on longitudinal spectral-domain (SD)-OCT and fundus autofluorescence images collected from 70 eyes with GA. The intensity of the Bruch's membrane-outer choroid (BMChoroid) retinal junction exhibited a relative importance of 24% in GA progression. The intensity of the inner retinal pigment epithelium (RPE) and BM junction (InRPEBM) showed a relative importance of 22%. BMChoroid (which includes the AMD feature of choriocapillaris damage), followed by InRPEBM (which includes the AMD feature of RPE damage), are the layers that appear to be most relevant in predicting the progression of AMD.

18.
Sci Rep ; 9(1): 10990, 2019 07 29.
Article in English | MEDLINE | ID: mdl-31358808

ABSTRACT

Age-related macular degeneration (AMD) affects millions of people and is a leading cause of blindness throughout the world. Ideally, affected individuals would be identified at an early stage before late sequelae such as outer retinal atrophy or exudative neovascular membranes develop, which could produce irreversible visual loss. Early identification could allow patients to be staged and appropriate monitoring intervals to be established. Accurate staging of earlier AMD stages could also facilitate the development of new preventative therapeutics. However, accurate and precise staging of AMD, particularly using newer optical coherence tomography (OCT)-based biomarkers, may be time-intensive and requires expert training, which may not be feasible in many circumstances, particularly in screening settings. In this work we develop a deep learning method for automated detection and classification of early AMD OCT biomarkers. Deep convolutional neural networks (CNNs) were explicitly trained for performing automated detection and classification of hyperreflective foci, hyporeflective foci within the drusen, and subretinal drusenoid deposits from OCT B-scans. Numerous experiments were conducted to evaluate the performance of several state-of-the-art CNNs and different transfer learning protocols on an image dataset containing approximately 20,000 OCT B-scans from 153 patients. An overall accuracy of 87% for identifying the presence of early AMD biomarkers was achieved.
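
One transfer-learning protocol such studies typically compare is "freeze the backbone, train a new head"; a minimal PyTorch sketch of that protocol is below. It is an illustrative assumption rather than the authors' exact setup: ResNet50 is chosen arbitrarily, and the three biomarkers from the abstract are treated as a simple 3-way classification for the example.

```python
import torch.nn as nn
from torchvision import models

def frozen_backbone_classifier(num_classes: int = 3) -> nn.Module:
    """ImageNet-pretrained ResNet50 with frozen weights and a new trainable head."""
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False                           # freeze the pretrained backbone
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # only this layer is trained
    return model
```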


Subjects
Deep Learning, Macular Degeneration/diagnostic imaging, Optical Coherence Tomography/methods, Biomarkers/analysis, Computer-Assisted Diagnosis/methods, Early Diagnosis, Humans, Computer-Assisted Image Processing/methods, Macular Degeneration/diagnosis, Retinal Drusen/diagnosis, Retinal Drusen/diagnostic imaging
19.
J Glaucoma ; 27(11): 957-964, 2018 11.
Article in English | MEDLINE | ID: mdl-30095604

ABSTRACT

PURPOSE: To evaluate aqueous humor outflow (AHO) in intact eyes of live human subjects during cataract surgery using fluorescein aqueous angiography. METHODS: Aqueous angiography was performed in 8 live human subjects (56 to 86 y old; 2 men and 6 women). After anesthesia, fluorescein (2%) was introduced into the eye [either alone or after indocyanine green (ICG; 0.4%)] from a sterile, gravity-driven constant-pressure reservoir. Aqueous angiographic images were obtained with a Spectralis HRA+OCT and FLEX module (Heidelberg Engineering). Using the same device, anterior-segment optical coherence tomography (OCT) and infrared images were also concurrently taken with aqueous angiography. RESULTS: Fluorescein aqueous angiography in the live human eye showed segmental AHO patterns. Initial angiographic signal was seen on average by 14.0±3.0 seconds (mean±SE). Using multimodal imaging, angiographically positive signal colocalized with episcleral veins (infrared imaging) and intrascleral lumens (anterior-segment OCT). Sequential aqueous angiography with ICG followed by fluorescein showed similar segmental angiographic patterns. DISCUSSION: Fluorescein aqueous angiography in live humans was similar to that reported in nonhuman primates and to ICG aqueous angiography in live humans. As segmental patterns with sequential angiography using ICG followed by fluorescein were similar, these tracers can now be used sequentially, before and after trabecular outflow interventions, to assess their effects on AHO in live human subjects.


Subjects
Aqueous Humor/metabolism, Cataract Extraction, Fluorescein Angiography/methods, Aged, Aged 80 and over, Female, Fluorescein/metabolism, Humans, Male, Middle Aged, Optical Coherence Tomography/methods
20.
J Digit Imaging ; 31(6): 869-878, 2018 12.
Article in English | MEDLINE | ID: mdl-29704086

ABSTRACT

Fundus images obtained in a telemedicine program are acquired at different sites by people with varying levels of experience. This results in a relatively high percentage of images that are later marked as unreadable by graders. Unreadable images require recapture, which is time- and cost-intensive. An automated method that determines image quality during acquisition is an effective alternative. Here we describe such an automated method for the assessment of image quality in the context of diabetic retinopathy (DR). The method applies machine learning techniques to assess the image and assign it to an 'accept' or 'reject' category, where a 'reject' image requires recapture. A deep convolutional neural network is trained to grade the images automatically. A large representative set of 7000 colour fundus images, obtained from EyePACS and made available by the California Healthcare Foundation, was used for the experiment. Three retinal image analysis experts were employed to categorise these images into 'accept' and 'reject' classes based on a precise definition of image quality in the context of DR. The network was trained using 3428 images. The method shows an accuracy of 100% in categorising 'accept' and 'reject' images, about 2% higher than the traditional machine learning method. In a clinical trial, the proposed method showed 97% agreement with the human graders. The method can be easily incorporated into the fundus image capturing system at the acquisition centre and can guide the photographer as to whether a recapture is necessary.
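
How such a model might sit inside the capture workflow can be sketched as a simple gate; this is an assumption about integration, not the deployed system. `model` stands for any trained two-class quality network with outputs ordered ('accept', 'reject'), and the preprocessing is illustrative.

```python
import torch
from torchvision import transforms
from PIL import Image

LABELS = ("accept", "reject")  # assumed output ordering of the trained quality CNN

def needs_recapture(model: torch.nn.Module, image_path: str) -> bool:
    """Return True if the freshly captured fundus photo should be retaken."""
    preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        predicted = model(x).argmax(dim=1).item()
    return LABELS[predicted] == "reject"
```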


Subjects
Diabetic Retinopathy/diagnostic imaging, Fundus Oculi, Computer-Assisted Image Processing/methods, Retina/diagnostic imaging, Telemedicine/methods, Algorithms, Humans, Machine Learning, Neural Networks (Computer)